Image Explanation


Evaluating the Utility of Model Explanations for Model Development

Im, Shawn, Andreas, Jacob, Zhou, Yilun

arXiv.org Artificial Intelligence

One of the motivations for explainable AI is to allow humans to make better and more informed decisions regarding the use and deployment of AI models. But careful evaluations are needed to assess whether this expectation has been fulfilled. Current evaluations mainly focus on algorithmic properties of explanations, and those that involve human subjects often employ subjective questions to test humans' perception of explanation usefulness, without being grounded in objective metrics and measurements. In this work, we evaluate whether explanations can improve human decision-making in practical scenarios of machine learning model development. We conduct a mixed-methods user study involving image data to evaluate saliency maps generated by SmoothGrad, GradCAM, and an oracle explanation on two tasks: model selection and counterfactual simulation. To our surprise, we did not find evidence of significant improvement on these tasks when users were provided with any of the saliency maps, even the synthetic oracle explanation designed to be simple to understand and highly indicative of the answer. Nonetheless, explanations did help users more accurately describe the models. These findings suggest caution regarding the usefulness of saliency-based explanations and their potential for misunderstanding.
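For context on how saliency maps of this kind are typically produced, below is a minimal SmoothGrad sketch in PyTorch. The model, target class, and noise settings are illustrative assumptions, not the study's actual setup.

```python
# Illustrative sketch (not from the paper): a minimal SmoothGrad saliency map
# for a PyTorch image classifier. Model, class index, and noise settings are
# assumed for demonstration only.
import torch
import torchvision.models as models

def smoothgrad_saliency(model, image, target_class, n_samples=25, noise_std=0.15):
    """Average input gradients over noisy copies of the image."""
    model.eval()
    grads = torch.zeros_like(image)
    for _ in range(n_samples):
        noisy = (image + noise_std * torch.randn_like(image)).requires_grad_(True)
        score = model(noisy.unsqueeze(0))[0, target_class]
        score.backward()
        grads += noisy.grad.detach()
    # Aggregate absolute gradients across channels into a single 2D saliency map.
    return (grads / n_samples).abs().sum(dim=0)

# Example usage with an untrained ResNet and a random image stand-in.
model = models.resnet18(weights=None)
image = torch.rand(3, 224, 224)
saliency = smoothgrad_saliency(model, image, target_class=0)
print(saliency.shape)  # torch.Size([224, 224])
```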


The Ability of Image-Language Explainable Models to Resemble Domain Expertise

Werner, Petrus, Zapaishchykova, Anna, Ratan, Ujjwal

arXiv.org Artificial Intelligence

Recent advances in vision and language (V+L) models have had a promising impact on the healthcare field. However, such models struggle to explain how and why a particular decision was made. In addition, model transparency and the involvement of domain expertise are critical success factors for machine learning models to gain adoption in the field. In this work, we study the use of the local surrogate explainability technique to overcome the problem of black-box deep learning models. We explore the feasibility of resembling domain expertise by combining local surrogates with an underlying V+L model to generate multi-modal visual and language explanations. We demonstrate that such explanations can serve as helpful feedback in guiding model training for data scientists and machine learning engineers in the field.
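As an illustration of the local surrogate idea (not the authors' implementation), the sketch below fits a weighted linear model to a black box's predictions on masked copies of an input; the predict function, masking scheme, and proximity kernel are assumed for demonstration.

```python
# Illustrative LIME-style local surrogate: perturb features on/off, query the
# black box, and fit a weighted linear model whose coefficients act as the
# local explanation. All names and settings here are hypothetical.
import numpy as np
from sklearn.linear_model import Ridge

def local_surrogate(predict_fn, x, baseline, n_samples=500, random_state=0):
    rng = np.random.default_rng(random_state)
    d = x.shape[0]
    # Randomly keep each feature (original value) or replace it with the baseline.
    masks = rng.integers(0, 2, size=(n_samples, d))
    perturbed = np.where(masks == 1, x, baseline)
    preds = predict_fn(perturbed)
    # Weight samples by how close they stay to the original input.
    weights = np.exp(-np.sum(masks == 0, axis=1) / d)
    surrogate = Ridge(alpha=1.0)
    surrogate.fit(masks, preds, sample_weight=weights)
    return surrogate.coef_  # per-feature local importance

# Example with a toy linear black box and a zero baseline.
black_box = lambda X: X @ np.array([2.0, -1.0, 0.5, 0.0])
x = np.array([1.0, 2.0, 3.0, 4.0])
print(local_surrogate(black_box, x, baseline=np.zeros(4)))
```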


Fast Hierarchical Games for Image Explanations

#artificialintelligence

Finally, PartitionExplainer employs a hierarchical clustering approach to compute Owen values. We remark that one of the most important details of any explanation method based on feature removal is the baseline, which defines the value that the masked input x_C takes in the entries not in the coalition C.
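To make the role of the baseline concrete, the following sketch builds the masked input x_C for an assumed coalition C: entries in C keep the original values, and all other entries take the baseline value. The mean baseline shown is only one common choice; zeros or a blurred input are others.

```python
# Illustrative sketch of the baseline in feature-removal explanations.
# The coalition, input, and baseline values here are hypothetical examples.
import numpy as np

def mask_input(x, coalition, baseline):
    """Return x_C: original entries on the coalition, baseline entries elsewhere."""
    x_C = baseline.copy()
    idx = list(coalition)
    x_C[idx] = x[idx]
    return x_C

x = np.array([0.9, -1.2, 3.4, 0.0])
baseline = np.full_like(x, x.mean())  # mean baseline; zeros are another common choice
print(mask_input(x, coalition={0, 2}, baseline=baseline))
```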